Shell completion auto-install and pre-commit hook improvements #1124

Conversation
- Add lint-gate job to test.yml that runs fast checks (formatting, spelling, linting) before expensive test matrix and HPC jobs start
- Add concurrency groups to test.yml, coverage.yml, cleanliness.yml, and bench.yml to cancel superseded runs on new pushes
- Add ./mfc.sh precheck command for local CI validation before pushing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add cross-platform hash function (macOS uses md5, Linux uses md5sum)
- Validate -j/--jobs argument (require value, must be numeric)
- Improve error messages with actionable guidance
- Clarify that formatting has been auto-applied when check fails

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
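Hypothetically, such a helper might look like the sketch below; the function name `hash_file` is illustrative, not the PR's actual code:

```bash
# Hypothetical sketch: portable MD5 of a file.
# Linux ships `md5sum`; macOS ships `md5` (with -q for hash-only output).
hash_file() {
    if command -v md5sum >/dev/null 2>&1; then
        md5sum "$1" | awk '{print $1}'   # strip the trailing filename
    else
        md5 -q "$1"
    fi
}
```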
Add wait-for-tests job that polls the GitHub API to ensure:
- Lint Gate passes first (fast fail)
- All GitHub test jobs complete successfully
- Only then do benchmark jobs start

This prevents wasting HPC resources on benchmarking code that fails tests, while preserving the existing maintainer approval gate.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
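A rough sketch of what such a polling loop can look like against the public GitHub checks API; the repository name, token, and timing values below are placeholders, and the PR's actual job logic is more involved:

```bash
# Hypothetical: block until every check run on a commit completes,
# failing fast if any run concludes with "failure".
REPO="owner/repo"          # placeholder
SHA="$GITHUB_SHA"
while true; do
    runs=$(curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
        "https://api.github.com/repos/$REPO/commits/$SHA/check-runs")
    pending=$(echo "$runs" | jq '[.check_runs[] | select(.status != "completed")] | length')
    failed=$(echo "$runs" | jq '[.check_runs[] | select(.conclusion == "failure")] | length')
    if [ "$failed" -gt 0 ]; then
        echo "A required check failed; skipping benchmarks."
        exit 1
    fi
    [ "$pending" -eq 0 ] && break
    sleep 60
done
```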
- Add .githooks/pre-commit that runs ./mfc.sh precheck before commits
- Auto-install hook on first ./mfc.sh invocation (symlinks to .git/hooks/)
- Hook only installs once; subsequent runs skip if already present
- Developers can bypass with: git commit --no-verify

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Auto-detect available CPUs for parallel formatting:
- Linux: nproc
- macOS: sysctl -n hw.ncpu
- Fallback: 4

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
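The described fallback chain, as a small illustrative shell function (the name is hypothetical):

```bash
# Illustrative sketch of the detection order described above
detect_cpus() {
    if command -v nproc >/dev/null 2>&1; then
        nproc                   # Linux (GNU coreutils)
    elif command -v sysctl >/dev/null 2>&1; then
        sysctl -n hw.ncpu       # macOS / BSD
    else
        echo 4                  # conservative fallback
    fi
}
```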
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Keep both: pre-commit hook auto-install and 'Starting...' log message Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Register precheck in commands.py so it appears in:
- Shell tab completion
- CLI documentation
- ./mfc.sh precheck --help

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When completion scripts are auto-regenerated, also update the installed completions at ~/.local/share/mfc/completions/ if they exist. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When installed shell completions are auto-updated, print a message with the appropriate source command for the user's detected shell. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, installed completions only updated when source files changed and regeneration occurred. Now we also check if the installed completions are older than the generated ones (e.g., after git pull brings new pre-generated completions). Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
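In shell, such a freshness check can be done with the test operator `-nt` (newer than); the paths and message below are assumptions for illustration, not the PR's actual code:

```bash
# Hypothetical: refresh the installed completion file when the
# generated copy (e.g., after a git pull) is newer than it.
GEN="toolchain/completions/mfc.bash"                 # assumed generated path
INST="$HOME/.local/share/mfc/completions/mfc.bash"
if [ -f "$INST" ] && [ "$GEN" -nt "$INST" ]; then
    cp "$GEN" "$INST"
    echo "Updated installed shell completions."
fi
```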
- Remove -o bashdefault from bash complete command to prevent falling back to directory completion when no matches found
- Add explicit : (no-op) for zsh commands without arguments to prevent default file/directory completion

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Auto-install completions on first mfc.sh run (via main.py)
- Add -o filenames back to bash complete (needed for file completion)
- Keep -o bashdefault removed to prevent directory fallback
- Simplify code by having __update_installed_completions handle both install and update cases

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
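For context, a hedged sketch of a bash registration line using these options; the completion function name `_mfc_complete` is illustrative:

```bash
# Hypothetical registration line. `-o filenames` makes bash treat
# matches as file names (quoting, trailing slashes); leaving out
# `-o bashdefault` prevents falling back to bash's default completion
# when the function returns no matches.
complete -F _mfc_complete -o filenames ./mfc.sh
```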
Move completion auto-install to mfc.sh so it runs for ALL commands including help, precheck, etc. This ensures completions are always set up on first run.
- Install completion files to ~/.local/share/mfc/completions/
- Add source line to .bashrc or fpath to .zshrc
- Tell user to restart shell or source the file
- main.py now only handles updates when generated files change

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- -v/-vv/-vvv: output verbosity levels
- --debug: build with debug compiler flags (for MFC Fortran code)
- --debug-log/-d: Python toolchain debug logging (not MFC code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Avoid hogging resources on machines with many cores. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously checked if ~/.local/share/mfc/completions/ existed. Now checks if the actual completion file exists for the user's shell. This handles the edge case of an empty completions directory. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
📝 Walkthrough

Centralizes shell-completion installation into toolchain/bootstrap/completions.sh, standardizes quoted argument forwarding across bootstrap scripts, adds verbose install output to the Python bootstrap, refactors help/error formatting, updates many GitHub Actions triggers (bench now responds to Test Suite workflow_run), and makes multiple SLURM/CI script robustness and minor fixes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestSuite as Test Suite (workflow_run)
    participant BenchWF as bench.yml workflow
    participant FileChanges as file-changes job
    participant GHAPI as GitHub API
    participant Matrix as Matrix jobs
    TestSuite->>BenchWF: emit workflow_run (conclusion: success)
    BenchWF->>FileChanges: start (uses workflow_run.head_sha)
    FileChanges->>GHAPI: Get PR info (pr_number, author, approvals)
    GHAPI-->>FileChanges: pr_number / pr_author / pr_approved
    FileChanges-->>BenchWF: outputs (checkall, pr_number, pr_approved)
    BenchWF->>Matrix: trigger matrix jobs (ref = head_sha or fallback)
    Matrix->>Matrix: clone repo and run build/test
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pull request overview
This PR implements follow-up improvements to the CI and developer experience workflow introduced in #1122, focusing on shell completion auto-installation, completion auto-updates, and pre-commit hook enhancements.
Changes:
- Auto-install shell completions to ~/.local/share/mfc/completions/ on first ./mfc.sh run, with automatic rc file configuration for bash/zsh
- Auto-update installed completions when generated files are newer than installed copies
- Enhanced pre-commit hook with parallelism capped at 12 jobs and visible job count in output
- Fixed bash/zsh tab completion to prevent unwanted directory fallback by removing -o bashdefault
- Clarified documentation distinguishing --debug (compiler flags), -v/-vv/-vvv (verbosity), and --debug-log (toolchain debugging)
Reviewed changes
Copilot reviewed 1 out of 1 changed files in this pull request and generated 2 comments.
Show a summary per file

| File | Description |
|---|---|
| mfc.sh | Auto-install shell completions and pre-commit hook on first run, with rc file modifications |
| toolchain/main.py | Auto-update installed completions when regenerated files are newer |
| toolchain/mfc/cli/completion_gen.py | Remove -o bashdefault from bash completions and add explicit no-op for empty zsh commands |
| toolchain/bootstrap/precheck.sh | New precheck script for local CI validation before commits |
| toolchain/mfc/cli/commands.py | Add PRECHECK_COMMAND definition and clarify flag documentation |
| toolchain/mfc/cli/docs_gen.py | Update documentation for debug and verbosity flags |
| .githooks/pre-commit | New pre-commit hook with capped parallelism (max 12 jobs) |
| .github/workflows/test.yml | Add lint-gate job and concurrency groups |
| .github/workflows/coverage.yml | Add concurrency configuration |
| .github/workflows/cleanliness.yml | Add concurrency configuration |
| .github/workflows/bench.yml | Add concurrency and wait-for-tests job with polling logic |
Comments suppressed due to low confidence (2)
mfc.sh:42
- The shell completion auto-install logic modifies the user's shell rc files (.bashrc/.zshrc) without explicit user consent. This could be considered intrusive. Consider either:
  - Prompting the user before modifying their shell rc files, or
  - Printing a clear message explaining what was added and how to remove it if desired.
Additionally, the check on line 38 only verifies if COMPLETION_DIR exists in the rc file, which could produce false positives if the directory path appears in an unrelated context.
if [ -f "$RC_FILE" ] && ! grep -q "$COMPLETION_DIR" "$RC_FILE" 2>/dev/null; then
echo "" >> "$RC_FILE"
echo "# MFC shell completion" >> "$RC_FILE"
echo "$RC_LINE" >> "$RC_FILE"
fi
mfc.sh:16
- The symlink creation could fail silently if the user doesn't have write permissions to .git/hooks/ or if the target doesn't exist. Consider adding error handling to inform the user if the hook installation fails.
if [ -d "$(pwd)/.git" ] && [ ! -e "$(pwd)/.git/hooks/pre-commit" ] && [ -f "$(pwd)/.githooks/pre-commit" ]; then
ln -sf "$(pwd)/.githooks/pre-commit" "$(pwd)/.git/hooks/pre-commit"
log "Installed git pre-commit hook (runs$MAGENTA ./mfc.sh precheck$COLOR_RESET before commits)."
Check if installed completions are older than source files and update them automatically. Shows message with source command only on install or update, silent otherwise. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, workflows triggered on both push and pull_request events, causing duplicate CI runs when pushing to PR branches. Now push events only trigger CI on master, while PRs are tested via pull_request event. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The previous fix could incorrectly install zsh completions when running mfc.sh in bash if the user's login shell ($SHELL) was zsh. Now we only fall back to $SHELL when BASH_VERSION is also unset. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
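A minimal sketch of the detection order this commit implies; the variable names are the standard shell ones, and the structure is illustrative:

```bash
# Hypothetical detection order: trust the running interpreter first,
# and consult the login shell ($SHELL) only when BASH_VERSION is unset.
if [ -n "$BASH_VERSION" ]; then
    shell=bash                        # mfc.sh is actually running under bash
elif [ "${SHELL##*/}" = "zsh" ]; then
    shell=zsh                         # no bash signal; fall back to $SHELL
else
    shell=bash
fi
```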
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/test.yml:
- Around line 132-136: The workflow uses unquoted command substitution $(nproc)
which triggers shellcheck SC2046/actionlint; update the two command lines that
invoke "/bin/bash mfc.sh test -v ..." to quote the substitution as "$(nproc)" so
the parallelism arg is passed safely (keep $OPT1/$OPT2 unquoted as intended);
locate the occurrences by searching for the string "/bin/bash mfc.sh test -v"
and replace $(nproc) with "$(nproc)".
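Illustratively, the quoting fix is a one-character-pair change; the `-j` placement below is an assumption based on the toolchain's `-j/--jobs` flag, not verified against the workflow file:

```bash
# Before: unquoted substitution (shellcheck SC2046)
/bin/bash mfc.sh test -v -j $(nproc) $OPT1 $OPT2
# After: quoted, so the CPU count is passed as one argument
/bin/bash mfc.sh test -v -j "$(nproc)" $OPT1 $OPT2
```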
- Remove verbose flag from GitHub runner and coverage CI commands to prevent build hangs from pipe buffer exhaustion
- Pin pyrometheus to known-good commit 4983340 to fix potential chemistry build failures on Frontier

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add retry logic (3 attempts) to Frontier and Frontier AMD build scripts to handle transient Lustre I/O errors during compilation
- Replace fragile sbatch -W with monitor_slurm_job.sh in all submit scripts (Frontier, Frontier AMD, Phoenix) for resilient job monitoring with exponential backoff and sacct fallback
- Add SIGHUP trap to submit scripts to survive login node session drops

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
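A hedged sketch of a 3-attempt retry loop; the exact build command and job count are placeholders, and the clean step mirrors a later commit's observation that stale artifacts cause instant re-failure:

```bash
# Hypothetical retry wrapper for a flaky build step
attempts=3
for i in $(seq 1 "$attempts"); do
    ./mfc.sh build -j 8 && break     # placeholder command
    echo "Build attempt $i failed."
    if [ "$i" -eq "$attempts" ]; then
        exit 1
    fi
    ./mfc.sh clean    # start the next attempt from a fresh tree
done
```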
2 issues found across 5 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name=".github/workflows/phoenix/submit.sh">
<violation number="1" location=".github/workflows/phoenix/submit.sh:71">
P2: The job ID parsing is too permissive; any digits in an sbatch error message will be treated as a valid job ID, causing the script to monitor a nonexistent job. Parse only the "Submitted batch job <id>" line.</violation>
</file>
<file name=".github/workflows/frontier_amd/submit.sh">
<violation number="1" location=".github/workflows/frontier_amd/submit.sh:61">
P2: The job ID parsing is too broad; it can capture unrelated numbers from sbatch warnings/errors and continue with a bogus job ID. Parse only the "Submitted batch job" line to avoid monitoring the wrong job.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Referenced diff context (.github/workflows/phoenix/submit.sh):

```bash
job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
if [ -z "$job_id" ]; then
```
P2: The job ID parsing is too permissive; any digits in an sbatch error message will be treated as a valid job ID, causing the script to monitor a nonexistent job. Parse only the "Submitted batch job <id>" line.
Prompt for AI agents

```
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/phoenix/submit.sh, line 71:
<comment>The job ID parsing is too permissive; any digits in an sbatch error message will be treated as a valid job ID, causing the script to monitor a nonexistent job. Parse only the "Submitted batch job <id>" line.</comment>
<file context>
@@ -61,4 +64,18 @@ job_interface="$3"
+)
+
+job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+if [ -z "$job_id" ]; then
+    echo "ERROR: Failed to submit job. sbatch output:"
+    echo "$submit_output"
</file context>
```
Referenced diff context (.github/workflows/frontier_amd/submit.sh):

```bash
EOT
)

job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
```
P2: The job ID parsing is too broad; it can capture unrelated numbers from sbatch warnings/errors and continue with a bogus job ID. Parse only the "Submitted batch job" line to avoid monitoring the wrong job.
Prompt for AI agents

```
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/frontier_amd/submit.sh, line 61:
<comment>The job ID parsing is too broad; it can capture unrelated numbers from sbatch warnings/errors and continue with a bogus job ID. Parse only the "Submitted batch job" line to avoid monitoring the wrong job.</comment>
<file context>
@@ -53,4 +56,17 @@ job_interface="$3"
EOT
+)
+
+job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+if [ -z "$job_id" ]; then
+    echo "ERROR: Failed to submit job. sbatch output:"
</file context>
```
Suggested change:

```diff
-job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4}')
```
Referenced diff context (.github/workflows/frontier/submit.sh):

```bash
EOT
)

job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
```
Suggestion: The job ID extraction uses grep -oE '[0-9]+' directly on the sbatch output, which returns every numeric substring on separate lines; if the output ever contains more than one number, job_id will contain multiple IDs separated by newlines, causing the monitoring script to receive an invalid job identifier and fail to track the job correctly. Limiting the extraction to the first numeric match avoids this and ensures job_id is a single valid token. [logic error]
Severity Level: Major ⚠️
- ❌ Array jobs monitored with malformed, multi-token SLURM job IDs.
- ⚠️ Frontier submit wrapper unreliable for scripts using --array.
- ⚠️ Downstream monitoring scripts may misreport job status.

Suggested change:

```diff
-job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+job_id=$(echo "$submit_output" | grep -oE '[0-9]+' | head -n1)
```
Steps of Reproduction ✅
1. In the repo root, create a SLURM batch script `array_job.sh` that defines an array job,
e.g.:
- `#SBATCH --array=0-3`
This script will later be passed as `$1` to `.github/workflows/frontier/submit.sh`.
2. On a SLURM system, run the wrapper:
- `.github/workflows/frontier/submit.sh array_job.sh cpu`
This executes the logic in `.github/workflows/frontier/submit.sh` including the
`sbatch` submission block and captures `sbatch` output into `submit_output`.
3. SLURM prints a message like `Submitted batch job 123456_0` (typical for array jobs) to
stdout. The script at `.github/workflows/frontier/submit.sh:31-58` captures this into
`submit_output`, then the block at lines 61-66 executes:
- `job_id=$(echo "$submit_output" | grep -oE '[0-9]+')`
`grep -oE '[0-9]+'` returns two lines: `123456` and `0`, so `job_id` contains a
newline-separated list of two numeric tokens.
4. The monitoring code at `.github/workflows/frontier/submit.sh:71-72` runs:
- `bash "$SCRIPT_DIR/../../scripts/monitor_slurm_job.sh" "$job_id" "$output_file"`
Because `"$job_id"` contains a newline, `monitor_slurm_job.sh` receives an invalid job
identifier (either as a value with embedded newline or as multiple numeric tokens,
depending on its parsing). This causes incorrect monitoring behavior for the submitted
job array (e.g., failing SLURM commands like `squeue -j` or tracking the wrong job),
demonstrating that extracting all numeric substrings from `submit_output` produces an
invalid job ID when multiple numbers are present.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** .github/workflows/frontier/submit.sh
**Line:** 61:61
**Comment:**
*Logic Error: The job ID extraction uses `grep -oE '[0-9]+'` directly on the `sbatch` output, which returns every numeric substring on separate lines; if the output ever contains more than one number, `job_id` will contain multiple IDs separated by newlines, causing the monitoring script to receive an invalid job identifier and fail to track the job correctly. Limiting the extraction to the first numeric match avoids this and ensures `job_id` is a single valid token.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

Referenced diff context (.github/workflows/frontier_amd/submit.sh):

```bash
EOT
)

job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
```
Suggestion: The job ID extraction uses a bare grep -oE '[0-9]+' on the entire sbatch output, so any digits from additional messages (e.g., debug logs or some error texts that still return exit 0) can be misinterpreted as the job ID, causing the monitor to track the wrong job or think submission succeeded when it did not. Restrict the parsing to the standard "Submitted batch job NNNNN" line and extract the numeric field from that line only. [logic error]
Severity Level: Major ⚠️
- ❌ monitor_slurm_job.sh may track wrong SLURM job.
- ⚠️ Frontier CI runs may be misreported as successful.

Suggested change:

```diff
-job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+job_id=$(echo "$submit_output" | grep -E 'Submitted batch job[[:space:]]+[0-9]+' | head -n1 | awk '{print $4}')
```
Steps of Reproduction ✅
1. On a Frontier login node, create a simple SLURM script `run.sh` in the repo root that
just echoes a message and exits 0.
2. Run `.github/workflows/frontier_amd/submit.sh run.sh cpu` so that
`.github/workflows/frontier_amd/submit.sh:1-72` executes and calls `sbatch <<EOT` to
submit the job.
3. On clusters where `sbatch` prepends informational or warning lines containing digits
(e.g., maintenance date or policy message) before the standard `Submitted batch job NNNNN`
line, those lines become part of `submit_output` at
`.github/workflows/frontier_amd/submit.sh:34-59`.
4. At `.github/workflows/frontier_amd/submit.sh:61`, `job_id=$(echo "$submit_output" |
grep -oE '[0-9]+')` captures *all* numeric tokens (one per line), so `job_id` becomes a
multi-line string like `2025\n02\n10\n12345`; when passed to
`.github/scripts/monitor_slurm_job.sh` at lines 71-72 as the first argument, the
monitoring script is given an invalid or wrong job ID and will monitor the wrong job or
fail to monitor at all.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** .github/workflows/frontier_amd/submit.sh
**Line:** 61:61
**Comment:**
*Logic Error: The job ID extraction uses a bare `grep -oE '[0-9]+'` on the entire sbatch output, so any digits from additional messages (e.g., debug logs or some error texts that still return exit 0) can be misinterpreted as the job ID, causing the monitor to track the wrong job or think submission succeeded when it did not. Restrict the parsing to the standard "Submitted batch job NNNNN" line and extract the numeric field from that line only.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

Referenced diff context (.github/workflows/phoenix/submit.sh):

```bash
EOT
)
```
Suggestion: The job ID is parsed by grepping for any number in the sbatch output, so if sbatch prints additional numeric tokens (for example in warnings or extra messages), the script may capture multiple numbers or the wrong one, causing the monitoring script to track an invalid job or fail; instead, parse the specific "Submitted batch job" line and extract only the job ID field. [logic error]
Severity Level: Major ⚠️
- ❌ Failed sbatch treated as successful job submission.
- ⚠️ monitor_slurm_job.sh tracks nonexistent or wrong SLURM job.
- ⚠️ Users misled about status of cluster submissions.

Suggested change:

```bash
job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4; exit}')
```
Steps of Reproduction ✅
1. On a SLURM cluster, in the repo root, create a job script `bad_job.sh` whose SBATCH
options cause `sbatch` to reject it but emit an error containing digits (for example an
invalid time limit), ensuring `.github/workflows/phoenix/submit.sh` can read it at lines
12–15.
2. Run the submission helper: `.github/workflows/phoenix/submit.sh bad_job.sh cpu none`;
this calls `sbatch` via the here‑document started at line 40 and ended at line 66,
capturing all `sbatch` stdout/stderr into `submit_output`.
3. Because `sbatch` rejected the job, its output does not contain the normal `Submitted
batch job <id>` line, but does contain numeric tokens in the error message; at line 69,
`grep -oE '[0-9]+'` extracts all digit sequences from that error so `job_id` becomes a
concatenation of those numbers, and the non‑empty check at lines 69–73 passes as if
submission succeeded.
4. The script then prints `Submitted batch job $job_id` (line 76) and invokes
`scripts/monitor_slurm_job.sh` with this bogus job ID at lines 79–80, causing the
monitoring script to watch a non‑existent or wrong job and never report the actual
submission failure.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** .github/workflows/phoenix/submit.sh
**Line:** 69:69
**Comment:**
*Logic Error: The job ID is parsed by grepping for any number in the sbatch output, so if sbatch prints additional numeric tokens (for example in warnings or extra messages), the script may capture multiple numbers or the wrong one, causing the monitoring script to track an invalid job or fail; instead, parse the specific "Submitted batch job" line and extract only the job ID field.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @.github/workflows/frontier_amd/build.sh:
- Around line 24-26: The variable dirname is assigned but never used inside the
loop (see the for loop over dir and the assignment dirname=$(basename "$dir")),
so remove the unused assignment line to avoid confusion; if the basename was
intended to be used later, either replace occurrences to use dirname or actually
use dirname in relevant commands (e.g., in the call to ./mfc.sh), otherwise
delete the dirname=$(basename "$dir") line.
In @.github/workflows/frontier/build.sh:
- Around line 24-26: The script assigns dirname=$(basename "$dir") inside the
for loop but never uses it; remove the unused assignment (the dirname variable)
from the loop in the build.sh so the loop simply iterates over benchmarks/*/ and
runs ./mfc.sh without creating dirname, or alternatively use dirname in a
log/message if you intended to reference the directory name; update the loop
around the for dir in benchmarks/*/; do and the ./mfc.sh run invocation
accordingly.
In @.github/workflows/phoenix/submit.sh:
- Around line 70-75: The current extraction uses job_id=$(echo "$submit_output"
| grep -oE '[0-9]+') which can capture unrelated digits; change the extraction
to target the "Submitted batch job" line and pull the job number from that line
(e.g., filter submit_output for the line containing "Submitted batch job" then
extract the numeric field) so job_id only contains the actual SLURM ID used by
monitor_slurm_job.sh; update the code that sets job_id and the subsequent
existence check to rely on this targeted extraction (references: variable
job_id, input submit_output, and the downstream monitor_slurm_job.sh
invocation).
🧹 Nitpick comments (1)
.github/workflows/frontier/submit.sh (1)
1-72: Consider extracting the shared submit-and-monitor logic into a common helper.

All three submit scripts (phoenix, frontier, frontier_amd) now share nearly identical post-submission code: output capture → job_id parsing → error handling → monitor invocation. A shared helper (e.g., .github/scripts/submit_and_monitor.sh) accepting the sbatch output and output_file would reduce the maintenance surface and the chance of future divergence.

Not urgent — the current duplication is manageable for three files.
Referenced diff context (.github/workflows/phoenix/submit.sh):

```bash
job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
if [ -z "$job_id" ]; then
    echo "ERROR: Failed to submit job. sbatch output:"
    echo "$submit_output"
    exit 1
fi
```
grep -oE '[0-9]+' may capture extraneous numbers from sbatch output.
If sbatch emits any warnings or informational messages containing digits before the "Submitted batch job" line, job_id will contain multiple numbers (one per line). This would silently corrupt the argument passed to monitor_slurm_job.sh.
A more targeted extraction avoids this:
Proposed fix
```diff
-job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+job_id=$(echo "$submit_output" | grep -oP 'Submitted batch job \K[0-9]+')
```

If grep -P isn't available on the target system, an alternative:

```diff
-job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $NF}')
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| job_id=$(echo "$submit_output" | grep -oE '[0-9]+') | |
| if [ -z "$job_id" ]; then | |
| echo "ERROR: Failed to submit job. sbatch output:" | |
| echo "$submit_output" | |
| exit 1 | |
| fi | |
| job_id=$(echo "$submit_output" | grep -oP 'Submitted batch job \K[0-9]+') | |
| if [ -z "$job_id" ]; then | |
| echo "ERROR: Failed to submit job. sbatch output:" | |
| echo "$submit_output" | |
| exit 1 | |
| fi |
🤖 Prompt for AI Agents
In @.github/workflows/phoenix/submit.sh around lines 70 - 75, The current
extraction uses job_id=$(echo "$submit_output" | grep -oE '[0-9]+') which can
capture unrelated digits; change the extraction to target the "Submitted batch
job" line and pull the job number from that line (e.g., filter submit_output for
the line containing "Submitted batch job" then extract the numeric field) so
job_id only contains the actual SLURM ID used by monitor_slurm_job.sh; update
the code that sets job_id and the subsequent existence check to rely on this
targeted extraction (references: variable job_id, input submit_output, and the
downstream monitor_slurm_job.sh invocation).
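To illustrate why the targeted parse is safer, here is a small self-contained bash demo comparing the naive and targeted extraction; the sbatch-style output with extra digits is invented for the example:

```bash
#!/usr/bin/env bash
# Demo: naive vs targeted job-ID parsing (sample output is invented)
submit_output=$'sbatch: info: maintenance window 2025-02-10\nSubmitted batch job 123456'

naive=$(echo "$submit_output" | grep -oE '[0-9]+')
targeted=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4; exit}')

echo "naive:    $naive"      # 2025, 02, 10, 123456 on separate lines
echo "targeted: $targeted"   # 123456 only
```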
The build step runs directly on the login node (not through sbatch), so it also needs trap '' HUP to survive session drops. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
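A minimal sketch of the SIGHUP protection described above; the build command is a placeholder:

```bash
# Hypothetical: ignore SIGHUP so a dropped SSH session on the login
# node does not kill the long-running build started below.
trap '' HUP
./mfc.sh build -j 8    # placeholder command
```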
The verbose flag was removed based on a pipe buffer theory that turned out to be wrong — the real issue was SIGHUP from login node session drops, which is now fixed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Verbose output stalls GitHub-hosted runners (pipe buffer issue) but is safe on Frontier/Phoenix where output goes to SLURM files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The streaming build mode (-v) was reading stdout and stderr sequentially, causing a classic subprocess deadlock when the compiler's stderr filled the 64KB pipe buffer. Fix by merging stderr into stdout with subprocess.STDOUT. Now safe to use -v on all CI runners (GitHub-hosted and self-hosted). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
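A sketch of the fix in Python, assuming a streaming-read loop like the one described; the names and command are illustrative, not the toolchain's actual code:

```python
import subprocess

# Sketch of the deadlock fix named in the commit: merge stderr into
# stdout so there is only one pipe to drain, and the child can never
# block on a full, unread stderr buffer while we read stdout.
proc = subprocess.Popen(
    ["./mfc.sh", "build", "-j", "8"],   # placeholder command
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,           # the fix: no separate stderr pipe
    text=True,
)
assert proc.stdout is not None
for line in proc.stdout:                # stream output as it arrives
    print(line, end="")
proc.wait()
```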
Previously HDF5/Silo/post_process were only built during the Test step, adding build time to test execution. Now the Build step includes --test-all for MPI jobs so all targets are pre-built. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace lbl_comp with cluster field that maps to workflow directory names (phoenix, frontier, frontier_amd), making steps generic
- Collapse 6 conditional Build/Test steps into 2 generic ones
- Merge duplicate Archive Logs blocks into one
- Rename OPT1/OPT2 to TEST_ALL/TEST_PCT for clarity

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The pinned commit SHA breaks uv's shallow git fetch on macOS CI. Track HEAD instead since there's no tag at that commit. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="toolchain/pyproject.toml">
<violation number="1">
P2: The git dependency is no longer pinned to a specific commit, which makes builds non-reproducible and risks pulling unreviewed upstream changes. Keep a fixed commit or tag to ensure deterministic installs.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Stale build artifacts from a failed attempt cause instant re-failure. Run ./mfc.sh clean before retrying to ensure a fresh build. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
User description
Summary
Follow-up improvements to the CI and developer experience work from #1122:
- On first ./mfc.sh run, completions are installed to ~/.local/share/mfc/completions/ and the source line is added to .bashrc/.zshrc
- Clarified documentation distinguishing -v/-vv/-vvv (verbosity), --debug (compiler flags), and --debug-log (toolchain debugging)

Changes
- mfc.sh: Auto-install completions with shell rc setup
- toolchain/main.py: Auto-update installed completions when regenerated
- toolchain/mfc/cli/completion_gen.py: Fix bash/zsh completion options
- .githooks/pre-commit: Cap at 12 jobs, show count
- toolchain/mfc/cli/commands.py: Clarify flag documentation

Test plan
- Run ./mfc.sh on fresh clone: completions should auto-install
- Delete ~/.local/share/mfc/completions/ and run again: should reinstall
- build/ with -j N where N ≤ 12

🤖 Generated with Claude Code
PR Type
Enhancement, Tests
Description
Add CI lint-gate job and local precheck command for early validation
Auto-install git pre-commit hook and shell completions on first run
Gate benchmarks on test suite completion to prevent wasted HPC resources
Improve documentation clarity for debug, verbosity, and logging flags
Add concurrency groups to CI workflows to cancel superseded runs
Diagram Walkthrough
File Walkthrough
7 files
- toolchain/main.py: Auto-update installed completions when regenerated
- toolchain/mfc/cli/commands.py: Add precheck command and clarify flag documentation
- mfc.sh: Auto-install pre-commit hook and shell completions
- toolchain/bootstrap/precheck.sh: New precheck script for local CI validation
- .githooks/pre-commit: New git pre-commit hook running precheck
- .github/workflows/test.yml: Add lint-gate job and concurrency groups
- .github/workflows/bench.yml: Add wait-for-tests job and concurrency groups

1 files
- toolchain/mfc/cli/completion_gen.py: Fix bash/zsh completion options and prevent directory fallback

1 files
- toolchain/mfc/cli/docs_gen.py: Clarify debug and verbosity flag documentation

2 files
- .github/workflows/coverage.yml: Add concurrency groups to cancel superseded runs
- .github/workflows/cleanliness.yml: Add concurrency groups to cancel superseded runs

CodeAnt-AI Description
Shell help overhaul, resilient CI jobs, and clearer build output & completions
What Changed
Impact
✅ Clearer build failure output
✅ Fewer CI wasted runs (retries + test-gated benchmarks)
✅ Immediate tab completion setup on first run